Search Results: "blade"

2 January 2011

Lucas Nussbaum: Giving up on Ruby packaging

I have finally reached a decision regarding my involvement in the Debian Ruby packaging efforts. I have decided to stop. This has been a very hard decision to make. I have invested huge amounts of time in that work over the years. I still love the language, and will continue to use it on a daily basis for my own developments. I still hope that it will succeed. I know that some people will be disappointed by that decision (and that others will think "your work was useless anyway, people should use RVM and rubygems"). But I also know that I won't be able to push for all the required changes alone. I just don't have the time, nor the motivation. For the record, here are the changes I would have liked to see in the Ruby community. The core Ruby development community should mature.
The core Ruby development community is still dominated by Japanese developers. While not a bad thing in itself, it is easily explained by the fact that the main development mailing list, where most of the important decisions are taken, is in Japanese. ruby-dev@ should be closed, and all the technical discussions should happen on the English-speaking ruby-core@ list instead. The release management process should also improve. Currently, it looks like a total mess. The following Ruby development branches are actively maintained:
ruby_1_8 (106 commits over the last six months)
ruby_1_8_6 (4 commits over the last six months)
ruby_1_8_7 (35 commits over the last six months)
ruby_1_9_1 (4 commits over the last six months)
ruby_1_9_2 (227 commits over the last six months)
trunk (1543 commits over the last six months)
While the state of the ruby_1_8_6 and ruby_1_9_1 branches is clear (very important bugfixes only), the state of all of the other branches is rather unclear.
What's the stable Ruby branch? 1.8 or 1.9? If it's 1.9, why are people still actively developing in the ruby_1_8 branch? How long will they continue to be maintained in parallel, dividing the manpower? Is a Ruby 1.8.8 release to be expected? Will it be ABI/API compatible with 1.8.7? Is the ruby_1_8_7 branch really bugfixes-only? How much testing of it has been done? If it's bugfixes-only and regression-free, I should push it to Debian squeeze, due to be released in a few weeks. But would you recommend that? Due to past breakages in the ruby_1_8_7 branch, it's unlikely that we will do it.
Is the ruby_1_9_2 a regression-free, bugfix-only branch? If yes, isn't 227 commits over 6 months a lot? What will be the version of the next release of trunk? When is it expected? Will it be ABI-compatible with the current ruby_1_9_2 branch? API-compatible?
New releases in the 1.8.7 and 1.9.2 branches were done on December 25th. Why were there no betas or RCs allowing wider testing? How much testing has been done behind the scenes? Most of those questions have no clear answer. The Ruby development community should build a common understanding of the status of the various branches, and of their release expectations. Releasing on December 25th of each year sounds fun, but is releasing when everybody is on vacation really a good idea? It would be fantastic to have something similar to Python Enhancement Proposals in the Ruby community. But having open discussions in English about the major issues would already be great. Ruby is not just the interpreter.
The Ruby development community should clearly define what the Ruby platform is. There are some big players, like Rails, and newer interpreter releases should not be done before ensuring that those big players still work. Also, since we have alternative Ruby interpreters, like JRuby, Rubinius and MacRuby, we need a clear process on how they integrate with the rest of the ecosystem. For example, having each of them rely on their own outdated fork of the whole stdlib is ridiculous, since it's not where they compete.

The Ruby community should acknowledge that RVM and Rubygems are not for everybody. People who say so should be laughed at. Of course, RVM and Rubygems are nice tools for some people. But it is completely wrong to believe that compiling from source using RVM should be the standard way of installing Ruby, or that all people interested in installing Redmine should know that Ruby has its own specific packaging system. The Ruby community should work with their target platforms to improve how Ruby is distributed, instead of reinventing the wheel. That includes Debian, but also RedHat-based distros, for example. It is likely that it won't be possible to reach a one-size-fits-all situation. But that's real life.

Some people in the Ruby community should stop behaving like assholes. As one of the Debian Ruby maintainers, I have been routinely accused of creating crippled packages on purpose (FTR, I don't think that the Debian packages are crippled, despite what the rumors say). Debian is not the only target of that. Just yesterday, someone called for abandoning YARV (the new Ruby VM in Ruby 1.9), calling it "Yet Another Random Vailure". This kind of comment is really hurting the people who are investing their free time in Ruby, and is turning away people who consider getting involved. In Debian, we have had a lot of problems getting people to help with Ruby maintenance since they are getting shit from the community all the time.

So, what's the future for Ruby in Debian? Update: there's also a number of interesting comments about this post on this site.
Update 2: First, thanks a lot for all the interesting comments. I will make some follow-up posts trying to summarize what was said. It seems that this post also triggered some reactions on ruby-core@, with Charles Olivier explaining the JRuby stdlib fork, and Yui Naruse clarifying that all questions are welcomed on ruby-core@. This is great, really.

26 November 2010

Leo 'costela' Antunes: My kingdom for a VGA cable

So you have two geeks in a university room after a relatively late and (at least for one of them) unproductive learning session. It's just natural that they decide to kick back and watch some mind-numbingly stupid geek series, which in this case happened to be Stargate SG-1 (so absurdly shitty it's actually very entertaining).
The first lazy geek instinct is to just watch it on the laptop that has the file, which with its 11" display and shitty speakers doesn't turn out to be a great idea. The next try involves the other laptop, but a 13" screen isn't that big of an improvement.
Since the room our intrepid heroes are in happens to have a pretty decent built-in projector and a couple of small but still a lot better than a laptop's Bose speakers, the obvious next step would be using it. The only problem is the lack of a VGA cable. Inspired by the brief sight of MacGyver on the 11" screen, one particularly enterprising geek comes up with the challenge of making a VGA cable out of the only material available at the time: one horribly yellow cat5 ethernet cable.
Being the helpful little extra-dimensional entity that it is, the internet happily provided all the needed information, and after some slight problems trying to appropriately deprive the cat5 of its connectors (no scissors and no blades of any kind in sight) and some annoying and manual sticking-cable-to-socket action (what did your mother tell you about sticking things in sockets?) our reluctant hackers get it right: it's ALIIIIVEEEE!!! The final solution looked like this: nothing like cable salad for dinner. And if you're wondering where those white wires came from, one final touch of über-hackerdom: notepads have never been more useful. This might seem like overkill, but after a nice 4-hour movie and series marathon, we can safely say it was totally worth it (but no, we didn't stand 4 hours of Stargate; even geeks have their limits). Just in case the Instructables page gets hosed at some point, here's the invaluable connection diagram, originally scraped off of a since-dead Geocities page.

6 November 2010

Jurij Smakov: DVD media for SunBlade 1000

This might save someone a good deal of sifting through ancient posts on obscure forums: if you wonder why your SunBlade 1000 workstation refuses to boot off a freshly burned DVD, keep in mind that the standard-issue DVD drive in those machines (typically Toshiba SD-M1401) is pretty picky about the DVD media type used. Trying to boot off DVD-R discs invariably failed for me with a "read failed" error, while the same image burned onto a DVD+R blank worked like a charm.

2 November 2010

Sam Hartman: Review of Git DPM

I've been looking at git dpm. It's a tool for managing Debian packages in git. The description promises the world: use of git to maintain both a set of patches to some upstream software and the history of those patches, at the same time. It also promises the ability to let me share my development repositories. The overhead is much less than topgit. My feelings are mixed. It does deliver on its promise to allow you to maintain your upstream patches and to use git normally-ish. It produces nice quilt patch series.

In order to understand the down side, it's necessary to understand a bit about how it works. It maintains your normal branch roughly the way you do things today. There's also a branch that is rebased often that is effectively the quilt series. It starts from the upstream and has a commit for every patch in your quilt series, containing exactly the contents of that patch. This branch is merged into your debian branch when you run git dpm update-patches.

The down side is that it makes the debian branch kind of fragile. If you commit an upstream change to the Debian branch it will be reverted by the next git dpm update-patches. The history will not be lost, but unless you have turned that change into a commit along your patches branch, git dpm update-patches will remove the change without warning. This can be particularly surprising if the change in question is a merge from the upstream branch. In that case, the merged-in changes will all be lost, but since the tip of the upstream branch is in your history, future merges will not bring them back. If you either also merge your changes into the patches branch, or tell git dpm about a new upstream, then the changes will reappear at the next git dpm update-patches.

The other fragility is that rebasing your debian branch can have really unpleasant consequences. The problem is that rebase removes merge commits: it assumes that the only changes introduced by merge commits are resolutions of conflicts. However, git dpm synthesizes merge commits. In particular, it takes the debian directory from your debian branch, plus the upstream sources from your upstream branch, plus all the patches you applied, and calls that your new debian branch. There's also a metadata file that contains pointers to the upstream branch and the quilt patch series branch. In addition, debian/patches gets populated. Discarding this commit completely would simply roll you back to the previous state of your patches. However, this loses history! There is no reference left pointing to the patches branch; the only reason it is still referenced is that it's a parent of the commit rebase is throwing away. This isn't really all that unusual; if you merged in any branch, deleted the branch and rebased away both the merge and the commits from the branch, you'd lose history. I do find it a bit easier to do with git dpm. However, it's more likely that you'll manage to rebase and keep the commits from the patches branch. In particular, if the horizon of your rebase includes a merge of the patches branch, then for every patch that involves changes to the debian branch from the point where rebase starts rewriting history, you will see a commit included by your rebase. My suspicion is that this commit is more likely to generate conflicts than usual, because it's a patch on top of the upstream sources being applied to a partially patched debian branch.
However, I have not fully worked through things: it may be that such a patch is exactly as likely to cause conflicts as it would in a git dpm update-patches operation. If that succeeds and you do not manage to fail to pick one of these commits, you'll end up with roughly the same debian sources you started with. There are two key exceptions: the git dpm metadata file is not updated, and your debian/patches is not consistent with your sources. The first means you're in the state of the previous paragraph: git dpm update-patches will blow away your changes. The second means that any source package you produce is going to be kind of funky, if not outright broken.

It is possible to recover from git dpm plus rebase. I think if you git dpm checkout-patches, then cherry-pick the patch-related commits from your debian branch that were introduced by the rebase, then git dpm update-patches, you'll be in a sane state. Obviously you could also recover by finding the pre-rebase commit and resetting there. I'm assuming, though, that you actually needed to rebase for some reason.

Of course git dpm does involve a space penalty. You generate a commit for every patch in your quilt series for every version of the upstream sources you deal with. Also, you will introduce and store every version of the quilt series directly in your debian directory. The extra commits probably aren't a big deal: the blobs probably delta-compress well. I'm not sure, though, that the quilt patch blobs will compress well with other things. In my opinion the ability to get nice quilt patches is well worth the space penalty.

In conclusion, git dpm is a sharp tool. Actually, I think that's understating things. Git dpm involves a bunch of rotating blades. You stick your face and foot between the blades, and if you use it correctly, there's exactly enough clearance that nothing gets damaged and you get some fairly neat results. Other outcomes are available. I'm still trying to evaluate if it is worth it. I think the tradeoff might be different for something like the krb5 package, which is maintained by two very experienced maintainers, than say for Shibboleth, which seems to involve several more maintainers. I'm not sure that there is much git dpm can do to make things better. Even detecting loss-of-foot events might be kind of tricky.
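Spelled out as commands, that recovery sequence might look roughly like this (a sketch only; the commit names are placeholders for the patch-related commits that the rebase replayed onto your debian branch):

$ git dpm checkout-patches         # switch to the patches branch
$ git cherry-pick <patch-commit>   # repeat for each patch-related commit the rebase left on the debian branch
$ git dpm update-patches           # regenerate debian/patches and merge back into the debian branch
# or, to simply undo the rebase instead:
$ git reflog                       # find the pre-rebase commit
$ git reset --hard <pre-rebase-commit>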

3 October 2010

Petter Reinholdtsen: Links for 2010-10-03

22 June 2010

Craig Small: Book: Do Androids Dream of Electric Sheep?

Actually they're more interested in freedom, to be no longer slaves to humans. I don't think it's any spoiler to mention that an electric sheep does appear in the book.

This is a classic science fiction story of a dark not-so-distant future. In fact, given that it's over 40 years old, it could be set in some alternate now. Some disaster has occurred and many have either left Earth or died. Some androids, or "andys", have escaped and bounty hunters are after them.

You may not know the book by its correct title, but you may have heard of the movie that is based upon it, called Blade Runner. Like most movies based on books, it takes some of the concepts and ideas of the book but in my opinion is a pale imitation.

I'm glad I read this book. It hasn't aged that much and it's very thought-provoking. Sci-Fi often uses some alternate reality to put a mirror up to what happens in the real world, and this book is no exception.

And a token link to Debian: if you want your own electric sheep you can install the electricsheep screensaver package, which draws pretty fractals on your screen.


7 June 2010

Daniel Stone: and one more thing ...

So, I spent an hour or two this afternoon following the iPhone 4 liveblog. It all looked fairly compelling (the screen!), right up until Steve's 'and one more thing': video calling.



HELLO I'M IN 2007, CAN YOU HEAR ME
(Photo of the Nokia N800, which shipped in January 2007, from rnair.)

The moral of the story? If you want to be four years ahead of the WWDC closing bombshell, email sales@collabora.co.uk. :)


PS: The 2010 'fuck it, we're going to fivefour blades' version; we also had a six-way video call going earlier today.

6 April 2010

Michael Prokop: Remote Console feature through Java applet failing?

I'm working for a customer who's using IBM blades. Remote access isn't limited to e.g. SoL, but is also possible through a Remote Console feature using a Java applet. After migrating one of my 32bit systems to a fresh 64bit system, I suddenly couldn't use this Remote Console feature any longer. The error message was (leaving it here for search engines and to help other affected users):
load: class vnc.VncViewer.class not found.
java.lang.ClassNotFoundException: vnc.VncViewer.class
	at sun.plugin2.applet.Applet2ClassLoader.findClass(Applet2ClassLoader.java:152)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:303)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
	at sun.plugin2.applet.Plugin2ClassLoader.loadCode(Plugin2ClassLoader.java:447)
	at sun.plugin2.applet.Plugin2Manager.createApplet(Plugin2Manager.java:2880)
	at sun.plugin2.applet.Plugin2Manager$AppletExecutionRunnable.run(Plugin2Manager.java:1397)
	at java.lang.Thread.run(Thread.java:619)
Caused by: java.net.ConnectException: Network is unreachable
	at java.net.PlainSocketImpl.socketConnect(Native Method)
	at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
	at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
	at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
	at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
	at java.net.Socket.connect(Socket.java:525)
	at sun.net.NetworkClient.doConnect(NetworkClient.java:161)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
	at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
	at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
	at sun.net.www.http.HttpClient.New(HttpClient.java:306)
	at sun.net.www.http.HttpClient.New(HttpClient.java:323)
	at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:860)
	at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:801)
	at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:726)
	at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1049)
	at java.net.HttpURLConnection.getResponseCode(HttpURLConnection.java:373)
	at sun.plugin2.applet.Applet2ClassLoader.getBytes(Applet2ClassLoader.java:458)
	at sun.plugin2.applet.Applet2ClassLoader.access$000(Applet2ClassLoader.java:46)
	at sun.plugin2.applet.Applet2ClassLoader$1.run(Applet2ClassLoader.java:126)
	at java.security.AccessController.doPrivileged(Native Method)
	at sun.plugin2.applet.Applet2ClassLoader.findClass(Applet2ClassLoader.java:123)
	... 6 more
Exception: java.lang.ClassNotFoundException: vnc.VncViewer.class
The error message might not be obvious at a glance, and that's actually why I'm writing about it. It's NOT the:
load: class vnc.VncViewer.class not found.
that is causing the failure; instead, the real reason for the failure is the:
java.net.ConnectException: Network is unreachable
As you can read in Debian's Bug Tracking System in bug #560044:
Netbase has recently introduced the sysctl setting net.ipv6.bindv6only=1 in /etc/sysctl.d/bindv6only.conf and this setting will probably be the default in squeeze. This setting breaks networking in java, and any traffic will always result in a "java.net.SocketException: Network is unreachable".
To quote /etc/sysctl.d/bindv6only.conf:
When disabled, IPv6 sockets will also be able to send and receive IPv4 traffic with addresses in the form ::ffff:192.0.2.1 and daemons listening on IPv6 sockets will also accept IPv4 connections. When IPV6_V6ONLY is enabled, daemons interested in both IPv4 and IPv6 connections must open two listening sockets.
To work around this issue you can either execute the Java process with "java -Djava.net.preferIPv4Stack=true", or, to change the IPv6 behaviour system-wide, execute "sysctl -w net.ipv6.bindv6only=0". To make this setting permanent across reboots, adjust the setting inside /etc/sysctl.d/bindv6only.conf. After applying this workaround the Remote Console should work again.
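Put together, the workaround looks something like this (a sketch; the trailing "..." stands in for whatever actually launches the Java applet on your system):

# per-process workaround:
$ java -Djava.net.preferIPv4Stack=true ...
# system-wide workaround, effective immediately:
$ sudo sysctl -w net.ipv6.bindv6only=0
# to keep it across reboots, set "net.ipv6.bindv6only = 0" in
# /etc/sysctl.d/bindv6only.conf and reload that file:
$ sudo sysctl -p /etc/sysctl.d/bindv6only.conf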

19 March 2010

MJ Ray: Co-ops at the North Somerset Initiative

On Wednesday, I went to a meeting of the Business Initiative for North Somerset for Cooperatives-SW (our regional cooperative cooperative). It was the first time anyone from the cooperative and mutual sector was present.

Bank of England

The first speaker was Geoff Harding from the Bank of England, who talked through topics in their agents' summary and related news. One interesting graph showed a steep rise in the percentage of household income being saved. Answers to questions suggested that more of that goes to mutuals and building societies, but they find it difficult to be competitive while the banks are keen to increase their balances. There was mention of "employment hoarding": businesses short-time working, redeploying or shutting down temporarily to keep trained workers under contract, rather than make them redundant and rehire later. People from both the Federation of Small Businesses and the Hoteliers Association made strong comments about the banks claiming to government that they are willing to lend, but still offering deeply unattractive depth-of-recession rates and terms. If the regional agents get details of such cases, they pass them to the central bank.

South West Regional Development Agency

The second main speaker was Ann O'Driscoll, who covers business development for the West of England (what many people call CUBA: Councils that Used to Be Avon). She introduced their four strategic priorities:

  1. Low Carbon Economy: apparently our region has good wind, wave and solar experience. However, Vestas were mentioned, and I know that when Vestas closed the UK's only wind turbine blade factories in August 2009, the RDAs were criticised for not acting, and the subsidy-chasing, socially irresponsible conduct of Vestas and related companies continues to worry workers. If Vestas is one of the praised companies, I wonder whether we're attracting sustainable work to the region. On a related note, North Somerset Council and NS Enterprise Agency are organising a Climate Change forum on 1st April, but it's at Cadbury House Hotel, which is awkward to get to except by car: no footpath to the door, a mile-and-a-half walk along a busy road from a train station, bad roads for biking, wheel-bending speed humps on the drive, I can't remember if there is bicycle parking, and its website doesn't say.
  2. Successful Businesses: SWRDA funds our rather poor Business Link service. Does someone in the South West have some good news about Business Link? If so, please leave a comment on this article.
  3. Prosperous Places: intervention in areas like south Bristol or central Weston.
  4. United Approach: co-ordinating with other regional groups.
The main activities introduced were: The questions were fascinating, except for mine. The best one was probably about the forthcoming election and the Conservatives' pledge to shut down the RDAs and Business Links. The answer was that much of the work will still need to be done somehow, so it's more a question of who will do it, rather than what the name on the door says. So I don't think axing RDAs is going to achieve more than shuffling people between organisations. When allowed to ask a question, I got too excited and ranted a bit about how the Co-operative Group's co-operative enterprise hub is doing work that Business Link should have been funding, but the RDA doesn't understand social enterprise and treats it as a ghetto service. I'll try to take that up in a more coherent form later.

Overview

I think Cooperatives-SW were invited to the North Somerset Initiative largely because of pressure from software.coop for co-ops to be represented better in the Local Strategic Partnership, which has treated us badly. NSI holds one of the three business places on the partnership board, with the other two being held by NSEA and Bristol International Airport. I think it's a bit off that a private company and an enterprise agency are both represented by other groups and also have their own board seats while co-ops don't even have a collective seat, so the pressure for a representative organisation for co-operatives and mutuals to have a board place (in line with national government guidance) should continue, useful though the NSI meetings seem to be. The other thing which would really help is some funding for co-operative business development specialists to work in North Somerset at strategic tasks like this, instead of it being left to ordinary workers from other sectors. I don't know where that funding will come from and, until then, I'll continue to try my best. We need to make sure that local strategy at least does no harm to the sector.

10 February 2010

Lucas Nussbaum: Debian Squeeze, Ubuntu Lucid and Ruby 1.9.2: NO!

Apparently, there's some hype in the Ubuntu community about Ruby 1.9.2, so let's clarify: it would be totally irresponsible to try to ship Ruby 1.9.2 in Debian Squeeze or Ubuntu Lucid. Ruby 1.9.2 has not been released yet: that's where we stand now. Even if surprises are possible, it's very unlikely that Ruby 1.9.2 will be released before Lucid's release (or, what the real requirement is, before Lucid's freeze). So we are sticking with 1.8.7 and 1.9.1 (1.9.0 will go away before the release). If you can point to specific commits that fix real bugs in 1.9.1, and could be backported to the Debian package, feel free to notify the Debian Ruby maintainers.

16 January 2010

C.J. Adams-Collier: AoE root for KVM guests

So. I'm trying to get familiar with libvirt and friends. To this end, I've set up a Lucid virtual machine booting from PXE into an initrd environment which does a pivot_root to an AoE block device. The #virt channel on irc.oftc.net told me that in order to have libvirt provide PXE capability, I would have to install a recent version of libvirt. I built version 0.7.5-3 from sid on my karmic laptop and it seems to be working okay. I decided to set up the PXE root directory in /var/lib/tftproot just because that's what the example code had in it. I had to manually configure a virtual network. Here is the XML config file:
$ sudo virsh net-dumpxml netboot
<network>
  <name>netboot</name>
  <uuid>81ff0d90-c91e-6742-64da-4a736edb9a9b</uuid>
  <forward mode='nat'/>
  <bridge name='virbr1' stp='off' delay='1' />
  <domain name='example.com'/>
  <ip address='192.168.123.1' netmask='255.255.255.0'>
    <tftp root='/var/lib/tftproot' />
    <dhcp>
      <range start='192.168.123.2' end='192.168.123.254' />
      <bootp file='pxelinux.0' />
    </dhcp>
  </ip>
</network>
This, of course, depends on the pxelinux.0 file. Luckily, this is packaged up in syslinux and can be installed with a simple
$ sudo apt-get install syslinux
$ sudo mkdir /var/lib/tftproot
$ sudo cp /usr/lib/syslinux/pxelinux.0 /var/lib/tftproot
I had to create a pxelinux config file for the virtual machine (indexed by mac address). Note that I put a console=ttyS0,115200 argument on the kernel command line so that I can attach to the serial port from the host system for copy/paste debugging. Also of importance is the root=/dev/etherd/e0.1p1 argument, specifying which block device we'll be doing the pivot_root to eventually.
$ mkdir /var/lib/tftproot/pxelinux.cfg/
$ cat /var/lib/tftproot/pxelinux.cfg/01-52-54-00-44-34-67
DEFAULT linux
LABEL linux
SAY Now booting the kernel from PXELINUX...
KERNEL vmlinuz-lucid0
APPEND ro root=/dev/etherd/e0.1p1 console=ttyS0,115200 initrd=initrd.img-lucid0
I decided to use the karmic kernel for lucid initially. I'll eventually switch over to the lucid kernel ;)
$ sudo cp /boot/vmlinuz-2.6.31-17-generic /var/lib/tftproot/vmlinuz-lucid0
I copied /etc/initramfs-tools to ~/tmp/lucid so that I didn't mess up the system initrd scripts:
$ mkdir -p ~/tmp/lucid && cp -r /etc/initramfs-tools ~/tmp/lucid/
Since mkinitramfs doesn't currently have a system for AoE root, I had to do a bit of fiddling. I copied the NFS root boot script and made a couple of modifications.
$ diff -u /usr/share/initramfs-tools/scripts/nfs ~/tmp/lucid/initramfs-tools/scripts/aoe
--- /usr/share/initramfs-tools/scripts/nfs	2008-06-23 23:10:21.000000000 -0700
+++ /home/cjac/tmp/lucid/initramfs-tools/scripts/aoe	2010-01-15 14:56:28.098298027 -0800
@@ -5,59 +5,25 @@
 retry_nr=0
 # parse nfs bootargs and mount nfs
-do_nfsmount()
+do_aoemount()
 {
-
 	configure_networking
-	# get nfs root from dhcp
-	if [ "x${NFSROOT}" = "xauto" ]; then
-		# check if server ip is part of dhcp root-path
-		if [ "${ROOTPATH#*:}" = "${ROOTPATH}" ]; then
-			NFSROOT=${ROOTSERVER}:${ROOTPATH}
-		else
-			NFSROOT=${ROOTPATH}
-		fi
-
-	# nfsroot=[<server-ip>:]<root-dir>[,<nfs-options>]
-	elif [ -n "${NFSROOT}" ]; then
-		# nfs options are an optional arg
-		if [ "${NFSROOT#*,}" != "${NFSROOT}" ]; then
-			NFSOPTS="-o ${NFSROOT#*,}"
-		fi
-		NFSROOT=${NFSROOT%%,*}
-		if [ "${NFSROOT#*:}" = "$NFSROOT" ]; then
-			NFSROOT=${ROOTSERVER}:${NFSROOT}
-		fi
-	fi
+        ip link set up dev eth0
-	if [ -z "${NFSOPTS}" ]; then
-		NFSOPTS="-o retrans=10"
-	fi
+        ls /dev/etherd/
-	[ "$quiet" != "y" ] && log_begin_msg "Running /scripts/nfs-premount"
-	run_scripts /scripts/nfs-premount
-	[ "$quiet" != "y" ] && log_end_msg
+        echo > /dev/etherd/discover
-	if [ ${readonly} = y ]; then
-		roflag="-o ro"
-	else
-		roflag="-o rw"
-	fi
+        ls /dev/etherd/
-	nfsmount -o nolock ${roflag} ${NFSOPTS} ${NFSROOT} ${rootmnt}
+        mount ${ROOT} ${rootmnt}
 }
-# NFS root mounting
+# AoE root mounting
 mountroot()
 {
-	[ "$quiet" != "y" ] && log_begin_msg "Running /scripts/nfs-top"
-	run_scripts /scripts/nfs-top
-	[ "$quiet" != "y" ] && log_end_msg
-
-	modprobe nfs
-	# For DHCP
-	modprobe af_packet
+	modprobe aoe
 	# Default delay is around 180s
 	# FIXME: add usplash_write info
@@ -67,17 +33,13 @@
 		delay=${ROOTDELAY}
 	fi
-	# loop until nfsmount succeds
+	# loop until aoemount succeds
 	while [ ${retry_nr} -lt ${delay} ] && [ ! -e ${rootmnt}${init} ]; do
 		[ ${retry_nr} -gt 0 ] && \
-		[ "$quiet" != "y" ] && log_begin_msg "Retrying nfs mount"
-		do_nfsmount
+		[ "$quiet" != "y" ] && log_begin_msg "Retrying AoE mount"
+		do_aoemount
 		retry_nr=$(( ${retry_nr} + 1 ))
 		[ ! -e ${rootmnt}${init} ] && /bin/sleep 1
 		[ ${retry_nr} -gt 0 ] && [ "$quiet" != "y" ] && log_end_msg
 	done
-
-	[ "$quiet" != "y" ] && log_begin_msg "Running /scripts/nfs-bottom"
-	run_scripts /scripts/nfs-bottom
-	[ "$quiet" != "y" ] && log_end_msg
 }
(below is the full file in case udiff is less convenient)
$ cat ~/tmp/lucid/initramfs-tools/scripts/aoe
# NFS filesystem mounting			-*- shell-script -*-
# FIXME This needs error checking
retry_nr=0
# parse nfs bootargs and mount nfs
do_aoemount()
{
	configure_networking
        ip link set up dev eth0
        ls /dev/etherd/
        echo > /dev/etherd/discover
        ls /dev/etherd/
        mount ${ROOT} ${rootmnt}
}
# AoE root mounting
mountroot()
{
	modprobe aoe
	# Default delay is around 180s
	# FIXME: add usplash_write info
	if [ -z "${ROOTDELAY}" ]; then
		delay=180
	else
		delay=${ROOTDELAY}
	fi
	# loop until aoemount succeds
	while [ ${retry_nr} -lt ${delay} ] && [ ! -e ${rootmnt}${init} ]; do
		[ ${retry_nr} -gt 0 ] && \
		[ "$quiet" != "y" ] && log_begin_msg "Retrying AoE mount"
		do_aoemount
		retry_nr=$(( ${retry_nr} + 1 ))
		[ ! -e ${rootmnt}${init} ] && /bin/sleep 1
		[ ${retry_nr} -gt 0 ] && [ "$quiet" != "y" ] && log_end_msg
	done
}
There was also a small modification to the initramfs.conf file:
$ diff -u /etc/initramfs-tools/initramfs.conf ~/tmp/lucid/initramfs-tools/initramfs.conf
--- /etc/initramfs-tools/initramfs.conf	2008-07-08 18:37:42.000000000 -0700
+++ /home/cjac/tmp/lucid/initramfs-tools/initramfs.conf	2010-01-15 14:33:38.088295207 -0800
@@ -47,14 +47,16 @@
 #
 #
-# BOOT: [ local | nfs ]
+# BOOT: [ local | nfs | aoe ]
 #
 # local - Boot off of local media (harddrive, USB stick).
 #
 # nfs - Boot using an NFS drive as the root of the drive.
 #
+# aoe - Boot using an AoE drive as the root of the drive.
+#
-BOOT=local
+BOOT=aoe
 #
 # DEVICE: ...
I also needed to add aoe to the list of modules included in the initramfs:
$ echo aoe >> ~/tmp/lucid/initramfs-tools/modules
In order to generate the initrd.img file from this new config, I ran the following:
$ sudo mkinitramfs -d ~/tmp/lucid/initramfs-tools/ -o /var/lib/tftproot/initrd.img-lucid0
I created a lucid VM by installing from the desktop install disk. You can grab the ISO here: http://cdimage.ubuntu.com/daily-live/current/ I'll leave the creation of the virtual machine and installation as an exercise for the reader. I put the filesystem on an LVM volume group called vg0, in a logical volume called lucid0 (i.e., /dev/vg0/lucid0). At this point, I created a new virtual machine called lucid0. Here is the XML for the domain:
$ sudo virsh dumpxml lucid0
<domain type='kvm' id='1'>
  <name>lucid0</name>
  <uuid>96fbad21-4f25-5700-ddd8-1a565c7170ee</uuid>
  <memory>524288</memory>
  <currentMemory>524288</currentMemory>
  <vcpu>1</vcpu>
  <os>
    <type arch='x86_64' machine='pc-0.11'>hvm</type>
    <boot dev='network'/>
  </os>
  <features>
    <pae/>
  </features>
  <clock offset='localtime'/>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/kvm</emulator>
    <interface type='network'>
      <mac address='52:54:00:44:34:67'/>
      <source network='netboot'/>
      <target dev='vnet0'/>
    </interface>
    <serial type='pty'>
   <source path='/dev/pts/4'/>
      <target port='0'/>
    </serial>
  <console type='pty' tty='/dev/pts/4'>
   <source path='/dev/pts/4'/>
      <target port='0'/>
    </console>
    <input type='tablet' bus='usb'/>
    <input type='mouse' bus='ps2'/>
    <graphics type='vnc' port='5901' autoport='yes' listen='127.0.0.1' keymap='en-us'/>
    <sound model='es1370'/>
    <video>
      <model type='cirrus' vram='9216' heads='1'/>
    </video>
  </devices>
</domain>
Now we're ready to start the AoE target and launch the virtual machine. If you don't have vblade installed, do so now:
$ sudo apt-get install vblade
Start the target up with the following command:
$ sudo vbladed 0 1 virbr1 /dev/vg0/lucid0
Now, if all goes well, you should be able to watch the virtual machine boot up and do its thing like so:
$ sudo virsh start lucid0 && sudo screen -S lucid0 $(sudo virsh ttyconsole lucid0) 115200
If you get errors about /dev/etherd/e0.1p1 not existing (these might look like this):
Begin: Retrying AoE mount ...
err         discover    interfaces  revalidate  flush
err         discover    interfaces  revalidate  flush
mount: mounting /dev/etherd/e0.1p1 on /root failed: No such file or directory
Done.
then you might want to try restarting vbladed like this:
$ sudo kill -9 $(ps auwx | grep vblade | grep -v grep | awk '{print $2}') && sudo vbladed 0 1 virbr1 /dev/vg0/lucid0
So. Now you should have a lucid gdm in your virt-manager console. Any questions? #virt on irc.oftc.net

7 January 2010

Marco d'Itri: What I want from a blade chassis switch

Why link aggregation? It is the only simple solution for having complete fault tolerance without configuring STP on the servers (which is annoying, because Linux does not implement portfast) or, even worse, OSPF. That's it. I do not need routing protocols or even L3 routing at all, just a simple switch. But apparently the vendors would rather sell me a high-end device...
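For reference, the server end of such a setup on a Debian system is commonly just a bonded interface; a minimal sketch of an /etc/network/interfaces stanza (assuming the ifenslave package, with made-up interface names and address) looks like this:

auto bond0
iface bond0 inet static
    # example address on the aggregated link
    address 192.0.2.10
    netmask 255.255.255.0
    # the two uplinks to the chassis switch, bonded with LACP (802.3ad)
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100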

1 January 2010

C.J. Adams-Collier: SAN configuration (AoE)

With the help of a couple of friends, we've put a 4.5T RAID-5 machine on our network and I'm trying to figure out how to share the storage with the rest of the hosts. In the past, I have used NFS and CIFS/Samba to provide access to remote hosts. This has generally worked okay so long as the server stays online. I don't know if the results are going to be much different, but I am now trying a different approach. I plan to run an iSCSI server, and I've already configured AoE (ATA over Ethernet). I've exported a block device on the network segment and mounted it on a remote host. This was pretty easy to configure. There is a bit of documentation on the internet already, but I'll give another quick overview. I gave the storage server the unoriginal name "san0". This host is running Debian lenny. I am testing the configuration from my Debian sid development host, which has the similarly unoriginal name "dev0". So, think "server" when you see san0 and "client" when you see dev0. I assume that you've already got an LVM volume group set up. Mine is called "vg0". Correct the following examples to account for any differences. You can use disk partitions instead of LVM logical volumes.
  1. Create a logical volume to be exported:
cjac@san0:~$ sudo lvcreate /dev/vg0 -n e0.1 -L 5G
  2. Load the AoE kernel module:
cjac@san0:~$ sudo modprobe aoe
  3. Install the package containing the vblade block device export server:
cjac@san0:~$ sudo apt-get install vblade
  4. Export the block device. Note that the ethernet bridge on which I export the device is called "loc":
cjac@san0:~$ sudo vbladed 0 1 loc /dev/vg0/e0.1
  5. Install the AoE discovery tools on the client:
cjac@dev0:~$ sudo apt-get install aoetools
  6. Load the AoE kernel module:
cjac@dev0:~$ sudo modprobe aoe
  7. Probe for exported AoE devices:
cjac@dev0:~$ sudo aoe-discover
  8. Verify that our exported device was discovered:
cjac@dev0:~$ test -e /dev/etherd/e0.1 && echo "yep"
yep

You can now treat /dev/etherd/e0.1 as you would any other block device. You can format it directly, or partition it and format a partition, use it as a device in your software RAID array, use it as swap space (ha), or something completely different. Now to figure out this iSCSI stuff
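As a quick example of the "format it directly" option above (ext3 and the mount point are arbitrary choices):

cjac@dev0:~$ sudo mkfs.ext3 /dev/etherd/e0.1
cjac@dev0:~$ sudo mkdir -p /mnt/e0.1
cjac@dev0:~$ sudo mount /dev/etherd/e0.1 /mnt/e0.1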

24 December 2009

Clint Adams: Muppetry

I'm a notoriously late adopter of new technologies. Sure, if I think that something is potentially a good idea, I'll go for it, but for the most part, I assume that things are stupid. I refused to use the WWW for many years, thinking it was idiotic. Now I use the WWW. I find microblogging to be inane and narcissistic at best, but I have derived mild amusement from Shaq on Twitter. Microsoft Windows is something I thought would be ridiculously unsuccessful, because it didn't do anything I would find useful. The list goes on and on, and includes something known in the popular vernacular as "configuration management". I've been hearing about BladeLogic and Opsware for quite some time, and idly wondering who the incompetent fools are that need something like this. Back in my day we could manage 100 servers with relative ease; no centralized authentication (because it was a running argument about whether NIS sucked more than NIS+ or vice-versa), no cfengine, no prebuilt server images or any of that jazz. If, for some reason, we needed to make the same change to all servers at once, or some subset thereof, we would just script it.

So when one of my acquaintances started raving about Puppet, I assumed it was equally pointless. We'll call him Downs. Downs exhibits a pattern of behavior where he will rave about something for a few weeks, sometimes without having even tried it first, and then eventually become disillusioned with it and start ranting with the same fervor with which he had insisted that whatever fad of the week was the greatest thing ever. It came as no surprise to me when he fell out of love with Puppet and started raving about how much better Chef is. In case you're wondering, he still has not tried Chef to this day.

Finally someone convinced me of the value of Puppet. I do not think it is the Way and the Light, but I do see a few situations wherein it makes sense to sacrifice efficiency, flexibility, sanity, and system resources to gain a way of building a near-identical machine from scratch. Now, beyond the flaws inherent to a solution like this, Puppet does have some annoying flaws which I find unnecessary, and which Downs ranted about back when I had absolutely no idea what he was talking about. I disregarded his alleged preference for Chef, but I have heard intelligent people extoll the virtues of Chef, and to a lesser extent Bcfg2. So I took a very brief look at each of those, and was pretty much horrified by what I found.

If I need something Puppet-like in the very near future, I will probably be choosing Puppet. I have no intention of running Puppet on machines I manage all by myself though. For that eventuality I am NIHing something with what I consider a better design. The current code is published, but it does not do very much, and requires linking with libdpkg.a which does not exist in any package (but Guillem's working on that, I think). I have no idea whether or not I'll have the motivation to finish it all by myself, but it is Free Software. This post is intentionally weak on details, or "deets" as the kids call them.

23 November 2009

Martín Ferrari: Movies

Just wanted to share comments on some movies I've watched recently. Tags: Planet Lugfi, Planet Debian

13 October 2009

Russell Coker: New Servers a non-virtual Cloud

NewServers.com [1] provides an interesting service. They have a cloud computing system that is roughly comparable to Amazon EC2, but for which all servers are physical machines (blade servers with real disks). This means that you get the option of changing between servers and starting more servers at will, but they are all physical systems, so you know that your system is not going to go slow because someone else is running a batch job. New Servers also has a bandwidth limit of 3GB per hour, with $0.10 per GB if you transfer more than that. Most people should find that 3GB/hour is enough for a single server. This compares to EC2, where you pay $0.10 per GB to receive data and $0.17 to transmit it. If you actually need to transmit 2100GB per month then the data transfer fees from EC2 would be greater than the costs of renting a server from New Servers. When running Linux the EC2 hourly charges are (where 1 ECU provides the equivalent CPU capacity of a 1.0-1.2GHz 2007 Opteron or 2007 Xeon processor):


Name                   Cost    Description
Small                  $0.10   1.7G, 160G, 32bit, 1ECU, 1core
Large                  $0.20   7.5G, 850G, 64bit, 4ECU, 2core
Extra Large            $0.40   15G, 1690G, 64bit, 8ECU, 4core
High CPU Medium        $0.20   1.7G, 350G, 32bit, 5ECU, 2core
High CPU Extra Large   $0.80   7G, 1690G, 64bit, 20ECU, 5core
The New Servers charges are:


Name     Cost    Description
Small    $0.11   1G, 36G, 32bit, Xeon 2.8GHz
Medium   $0.17   2G, 2*73G, 32bit, 2*Xeon 3.2GHz
Large    $0.25   4G, 250G, 64bit, E5405 Quad Core 2Ghz
Jumbo    $0.38   8G, 2*500G, 64bit, 2 x E5405 Quad Core 2Ghz
Fast     $0.53   4G, 2*300G, 64bit, E5450 Quad Core 3Ghz
The New Servers prices seem quite competitive with the Amazon prices. One down-side to New Servers is that you have to manage your own RAID; the cheaper servers have only a single disk (bad luck if it fails). The better ones have two disks and you could set up your own RAID. Of course the upside of this is that if you want a fast server from New Servers and you don't need redundancy, then you have the option of RAID-0 for better performance.

Also I don't think that there is anything stopping you from running Xen on a New Servers system. So you could have a bunch of Xen images and a varying pool of Dom0s to run them on. If you were to choose the Jumbo option with 8G of RAM and share it among some friends, with everyone getting a 512M or 1G DomU, then the cost per user would be a little better than Slicehost or Linode while giving better management options.

One problem I sometimes have with virtual servers for my clients is that the disk IO performance is poorer than I expect. When running the server that hosts my blog (which is shared with some friends) I know the performance requirements of all DomUs and can diagnose problems quickly. I can deal with a limit on the hardware capacity, I can deal with trading off my needs with the needs of my friends. But having a server just go slow, not knowing why, and having the hosting company say "I can move you to a different physical server" (which may be better or worse) doesn't make me happy.

I first heard about New Servers from Tom Fifield's LUV talk about using EC2 as a cluster for high energy physics [2]. According to the detailed analysis Tom presented, using EC2 systems on demand can compete well with the costs of buying Dell servers and managing them yourself; EC2 wins if you have to pay Japanese prices for electricity, but if you get cheap electricity then Dell may win. Of course a major factor is the amount of time that the servers are used; a cluster that is used for short periods of time with long breaks in between will have a higher cost per used CPU hour and thus make EC2 a better option.
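As a back-of-the-envelope check of the bandwidth comparison above, using the per-GB prices quoted in this post (2100GB is just the example amount of outbound traffic, and a 30-day month is assumed):

$ echo "3 * 24 * 30" | bc      # New Servers bundled allowance in GB/month
2160
$ echo "2100 * 0.17" | bc      # EC2 outbound transfer fees in dollars
357.00
$ echo "0.11 * 24 * 30" | bc   # a month of the New Servers Small plan in dollars
79.20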

6 October 2009

Steve Kemp: Poppa's got a brand new bang.

Recently I posted a brief tool for managing "dotfile collections". This tool was the rationalisation of a couple of ad-hoc scripts I already used, and was a quick hack written in nasty bash. I've updated my tool so that it is coded in slightly less nasty Perl. You can find the dotfile-manager repository online now. This tool works well with my dotfile repository, and the matching, but non-public, dotfiles-private repository. I suspect that this post might flood a couple of feed aggregators, because I've recently updated my chronicle blog compiler with a new release. This release has updated all the supplied themes/templates such that they validate strictly, and as part of that I had to edit some of my prior blog entries to remove bogus HTML markup. (Usually simple things such as failing to escape & characters correctly, or using "[p][/P]" due to sloppy shift-driving.) I should probably update the way I post entries, and use markdown or textile instead of manually writing HTML inside Emacs, but the habit has been here for too long. Even back when I used wordpress I wrote my entries in HTML... Finally, one other change in the most recent chronicle release is that the "mail-scanning.com theme" has been removed, as the service itself is no longer available. But all is not lost. ObFilm: Blade II

18 September 2009

Daniel Kahn Gillmor: Tools should be distinct from Services

Modern daemon implementations can be run in a variety of ways, in a range of contexts. The daemon software itself can be a useful tool in environments where the associated traditional system service is neither needed nor desired. Unfortunately, common debian packaging practice has a tendency to conflate the two ideas, leading to some potentially nasty problems where only the tool itself is needed, but the system service is set up anyway. How would i fix it? i'd suggest that we make a distinction between packages that provide tools and packages that provide system services. A package that provides the system service foo would need to depend on a package that provides the tool foo. But the tool foo should be available through the package manager without setting up a system service automatically.

Bad Examples

Here are some examples of this class of problem i've seen recently:
akonadi-server depends on mysql-server
akonadi is a project to provide extensible, cross-desktop storage for PIM data. It is a dependency of many pieces of the modern KDE4 desktop. Its current implementation relies on a private instantiation of mysqld, executed directly by the end user whose desktop is running. This means that a sysadmin who installs a graphical calendar application suddenly now has (in addition to the user-instantiated local mysqld running as the akonadi backend) a full-blown system RDBMS service running and potentially consuming resources on her machine. Wouldn't it be better if the /usr/sbin/mysqld tool itself was distinct from the system service?
puppetmaster depends on puppet
Puppet is a powerful framework for configuration management. A managed host installs the puppet package, which invokes a puppetd service to reconfigure the managed host by talking to a centralized server on the network. The central host installs the puppetmaster package, which sets up a system puppetmasterd service. puppetmaster depends on puppet to make use of some of the functionality available in the package. But this means that the central host now has puppetd running, and is being configured through the system itself! While some people may prefer to configure their all-powerful central host through the same configuration management system, this presents a nasty potential failure mode: if the configuration management goes awry and makes the managed nodes inaccessible, it could potentially take itself out too. Shouldn't the puppet tools be distinct from the puppetd system service?
monkeysphere Build-Depends: openssh-server
The Monkeysphere is a framework for managing SSH authentication through the OpenPGP Web-of-Trust (i'm one of the authors). To ensure that the package interacts properly with the OpenSSH implementation, the monkeysphere source ships with a series of test suites that exercise both sshd and ssh. This means that anyone trying to build the monkeysphere package must pull in openssh-server to satisfy the build-depends, thereby inadvertently starting up a potentially powerful network service on their build machine and maybe exposing it to remote access that they didn't intend. Wouldn't it be better if the /usr/sbin/sshd tool was available without starting up the ssh system service?
Good Examples

Here are some examples of debian packaging that already understand and implement this distinction in some way:
apache2.2-bin is distinct from apache2-mpm-foo
Debian's apache packaging recently transitioned to split the apache tool into a separate package (apache2.2-bin) from the packages that provide an apache system service (apache2-mpm-foo). So apache can now be run by a regular user, for example as part of gnome-user-share.
git-core is distinct from git-daemon-run
git-core provides the git daemon subcommand, which is a tool capable of providing network access to a git repo. However, it does not set up a system service by default. The git-daemon-run package provides a way for an admin to quickly set up a "typical" system service to offer networked git access. (See the sketch just after these examples.)
vblade is distinct from vblade-persist
vblade offers a simple, powerful utility to export a single file or block device as an AoE device. vblade-persist (disclosure: i wrote vblade-persist) provides a system service to configure exported devices, supervise them, keep them in service across system reboots, etc.
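To make the tool/service distinction concrete with the git example above, the same daemon code can be run either way (a rough sketch; the path and options are purely illustrative):

# ad hoc, as an unprivileged user, using only the tool shipped in git-core:
$ git daemon --export-all --base-path=/home/me/repos --reuseaddr
# as a supervised system service, by installing the separate service package:
$ sudo apt-get install git-daemon-run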
Tools are not Services: a Proposal

Let's consider an existing foo package which currently provides: I suggest that this should be split into two packages. foo-bin would contain the tool itself, and foo (which Depends: foo-bin) would contain the service configuration information, postinst scripts to set it up and start it, etc. This would mean that every instance of apt-get install foo in someone's notes would retain identical semantics to the past, but packages which need the tool (but not the service) can now depend on foo-bin instead, leaving the system with fewer resources consumed, and fewer unmanaged services running. For brand new packages (which don't have to account for a legacy of documentation which says apt-get install foo), i prefer the naming convention that the foo package includes the tool itself (after all, it does provide /usr/sbin/foo), and foo-service sets up a standard system service configured and running (and depends on foo).

Side Effects

This proposal would fix the problems noted above, but it would also offer some nice additional benefits. For example, it would make it easier to introduce an alternate system initialization process by creating alternate service packages (while leaving the tool package alone). Things like runit-services could become actual contenders for managing the running services, without colliding with the "stock" services installed by the bundled tool+service package. The ability to create alternate service packages would also mean that maintainers who prefer radically different configuration defaults could offer their service instantiations as distinct packages. For example, one system-wide foo daemon (foo-service) versus a separate instance of the foo daemon per user (foo-peruser-service) or triggering a daemon via inetd (foo-inetd-service). One negative side effect is that it adds some level of increased load on the package maintainers: if the service package and the tool package both come from the same source package, then some work needs to be done to figure out how to split them. If the tool and service packages have separate sources (like vblade and vblade-persist) then some coordinating footwork needs to be done between the two packages when any incompatible changes happen.

Questions? Disagreements? Next Steps?

Do you disagree with this proposal? If so, why? Unfortunately (to my mind), Debian has a long history of packages which conflate tools with services. Policy section 9.3.2 can even be read as deliberately blurring the line (though i'd argue that a cautious reading suggests that my proposal is not in opposition to policy):
Packages that include daemons for system services should place scripts in /etc/init.d to start or stop services at boot time or during a change of runlevel.
I feel like this particular conflict must have been hashed out before at some level; are there links to definitive discussion that i'm missing? Is there any reason we shouldn't push in general for this kind of distinction in debian daemon packages? Tags: daemons, packaging, policy

16 May 2009

Joey Hess: one of those days

Some spring days just seem full to bursting with unusual experiences. I spent the night at the yurt and in just four hours today:
big red axe
Thanks to Iain M from Melbourne who writes, "We have never met. I have used your software and gained insight from your blog, over the years. So thanks. Also, I have never bought someone an axe before. This is cool." I forgot I had that in my wishlist, but now I can split wood with ease, when I run out of Amazon packaging material to burn.

Steve Kemp: Humans don't drink blood.

I've said it multiple times, but all mailing list managers suck. Especially mailman. (On that topic SELinux is nasty, Emacs is the one true editor, and people who wear furry boots are silly.) Having set up some new domains I needed a mailing list manager, and had to re-evaluate the available options. Mostly I want something nice and simple, that doesn't mess around with character sets, that requires no fancy daemons, and has no built-in archiving solution. Really all we need is three basic operations: Using pipes we can easily arrange for a script to be invoked for different options:
# Aliases for 'list-blah' mailing list.
#
^list-blah-subscribe$:   "|/sbin/skxlist --list=list-blah --domain=example.com --subscribe"
^list-blah-unsubscribe$: "|/sbin/skxlist --list=list-blah --domain=example.com --unsubscribe"
^list-blah$:             "|/sbin/skxlist --list=list-blah --domain=example.com --post"
The only remaining concerns are security-related. The most obvious concern is that the named script will be launched by the mailserver user (Debian-exim in my case). That suggests that any files it creates (such as a list of email addresses, i.e. list members) will be owned by that user. That can be avoided with setuid-fu and having the mailing list manager be compiled. But compiled code? When there are so many lovely Perl modules out there? No chance! In conclusion, if you're happy for the exim user to own and be able to read the list data then you can use skxlist. It is in live use, and allows open lists, member-only lists, and admin-only lists. It will archive messages in a maildir, but they are otherwise ignored, left for you to use if you see fit. List options are pretty minimal, but I do a fair amount of sanity checking and I see no weaknesses except for the use of the Debian-exim UID. ObFilm: Blade
